
Industry Watchdog Established by AI Pioneers Amid Heightened Government Oversight

In response to demands for regulating the development of artificial intelligence, a consortium of technology firms, including Alphabet Inc.'s Google and OpenAI Inc., is establishing an industry organization intended to ensure the safety of AI models.

The effort, which is also backed by AI startup Anthropic and Microsoft Corp., aims to develop the expertise of member companies and create industry benchmarks, according to a Wednesday release. The group, known as the Frontier Model Forum, said it would welcome participation from other organizations working on large-scale machine learning platforms.

The rapid proliferation of generative AI tools like OpenAI's ChatGPT, which can produce text, images and even videos from simple prompts, has pushed tech giants to tread carefully. Companies participating in the Frontier Model Forum have already agreed to put safeguards in place, at the urging of the White House, before Congress potentially passes binding regulations.

“This is urgent work, and this forum is well positioned to act quickly to advance the state of AI safety,” Anna Makanju, vice president of global affairs at OpenAI, said in a statement.

In the coming months, the forum plans to form an advisory board to help set its priorities, and to establish a charter, a governance structure and funding for the work. The group also hopes to collaborate with existing initiatives, including the Partnership on AI and MLCommons.

Companies have long used machine learning to power search and social media features, but the latest generation of AI models has offered a glimpse of increasingly human-like capabilities.

Governments around the world, including the Group of Seven nations, have pledged to cooperate on addressing the risks of artificial intelligence, and Britain is planning to host an international AI summit. For now, however, AI oversight in the United States remains largely at companies' discretion.

“This initiative is an important step in bringing the technology sector together to responsibly advance artificial intelligence and address challenges to benefit all of humanity,” Microsoft President Brad Smith said in a statement.
